
    Large Scale Structure Formation of Normal Branch in DGP Brane World Model

    In this paper, we study the large scale structure formation of the normal branch of the DGP model (the Dvali, Gabadadze and Porrati brane world model) by applying the scaling method developed by Sawicki, Song and Hu to solve the coupled perturbed equations of motion on the brane and off the brane. There is a detectable departure of the perturbed gravitational potential from LCDM even for the minimal deviation of the effective equation of state w_eff below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch of the DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.
    Comment: 6 pages, 2 figures
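    For orientation, a minimal LaTeX sketch of the standard DGP background equation that distinguishes the two branches; the sign convention and the crossover scale r_c follow the usual brane world literature and are assumptions here, not quoted from the paper:

        H^2 - \epsilon\,\frac{H}{r_c} = \frac{8\pi G}{3}\,\rho,
        \qquad
        \epsilon = +1\ \text{(self-accelerating branch)},
        \qquad
        \epsilon = -1\ \text{(normal branch)} .

    On the normal branch the extra geometric term acts as an effective dark energy component, which is what lets the effective equation of state w_eff dip below -1 without the ghost instability associated with the self-accelerating branch.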

    Violation of monogamy inequality for higher-dimensional objects

    Bipartite quantum entanglement for qutrits and higher-dimensional objects is considered. We analyze the possibility of violating the monogamy inequality, introduced by Coffman, Kundu, and Wootters, for some systems composed of such objects. An explicit counterexample with a three-qutrit totally antisymmetric state is presented. Since the three-tangle has been confirmed to be a natural measure of entanglement for qubit systems, our result shows that the three-tangle is no longer a legitimate measure of entanglement for states of three qutrits or higher-dimensional objects.
    Comment: 2.5 pages, minor modifications are made
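    For reference, a short LaTeX sketch of the Coffman-Kundu-Wootters (CKW) monogamy inequality together with the totally antisymmetric three-qutrit state; the explicit state form is recalled from the standard literature rather than quoted from this abstract:

        \tau_{A|BC} \;\ge\; \tau_{AB} + \tau_{AC}
        \qquad \text{(CKW, proved for three qubits)},
        \qquad
        |\psi\rangle = \frac{1}{\sqrt{6}} \sum_{i,j,k=0}^{2} \epsilon_{ijk}\, |ijk\rangle .

    For qubits the nonnegative residual \tau_{A|BC} - \tau_{AB} - \tau_{AC} is exactly the three-tangle; a state for which this residual turns negative, such as the antisymmetric qutrit state above, is what strips the three-tangle of its meaning as an entanglement measure in higher dimensions.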

    Convolutional Dictionary Learning: Acceleration and Convergence

    Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or its variant, the alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and is further accelerated by incorporating momentum coefficient formulas and restarting techniques. All of the investigated methods incorporate a boundary artifact removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without any parameter tuning, the proposed BPG-M approach converges more stably to desirable solutions of lower objective value than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method with a multi-block updating scheme is particularly useful for single-threaded CDL on large datasets, owing to its lower memory requirement and its avoidance of polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
    Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image Processing
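    As a concrete illustration, here is a minimal Python sketch of one majorized proximal gradient step with momentum and adaptive restart, in the spirit of BPG-M, applied to the sparse-code block of a one-filter, 1-D CDL subproblem. The circular-convolution model, the diagonal majorizer m = ||d||_1^2, the restart criterion, and every name below are illustrative assumptions rather than the paper's exact multi-filter design:

        import numpy as np

        def soft_threshold(v, t):
            # Proximal operator of t*||.||_1: elementwise soft-thresholding.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def bpgm_code_step(x, d, z, z_prev, lam, beta):
            # One accelerated, majorized proximal-gradient step on the code z for
            #     0.5 * ||x - d (*) z||_2^2 + lam * ||z||_1,
            # where (*) is circular convolution (a simplification for the sketch).
            n = x.size
            Dhat = np.fft.fft(d, n)             # zero-padded filter spectrum
            m = np.sum(np.abs(d)) ** 2          # diagonal majorizer: m*I >= D^T D
            z_bar = z + beta * (z - z_prev)     # momentum extrapolation
            # Gradient of the data term at z_bar, computed via FFT convolutions.
            resid = np.real(np.fft.ifft(Dhat * np.fft.fft(z_bar))) - x
            grad = np.real(np.fft.ifft(np.conj(Dhat) * np.fft.fft(resid)))
            z_new = soft_threshold(z_bar - grad / m, lam / m)
            # Adaptive restart: drop momentum when the new step points against
            # the direction of travel (O'Donoghue-Candes style criterion).
            restart = np.dot(z_bar - z_new, z_new - z) > 0
            return z_new, restart

        # Tiny synthetic usage example.
        d = np.array([1.0, -2.0, 1.0])          # an assumed 3-tap filter
        z_true = np.zeros(64)
        z_true[[10, 40]] = [3.0, -2.0]
        x = np.real(np.fft.ifft(np.fft.fft(d, 64) * np.fft.fft(z_true)))
        z = z_prev = np.zeros(64)
        for k in range(200):
            z_new, restart = bpgm_code_step(x, d, z, z_prev, lam=0.1, beta=k / (k + 3))
            z_prev = z_new if restart else z    # restart zeroes the momentum
            z = z_new

    Because m = ||d||_1^2 upper-bounds every eigenvalue of D^T D (those eigenvalues are |FFT(d)|^2 under circular convolution), the step size needs no data-dependent tuning, which mirrors the abstract's point that BPG-M avoids the AL parameter tuning that ADMM requires.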